2 research outputs found

    Mitigating Off-Policy Bias in Actor-Critic Methods with One-Step Q-learning: A Novel Correction Approach

    Full text link
    Compared to their on-policy counterparts, off-policy model-free deep reinforcement learning methods can improve data efficiency by reusing previously gathered data. However, off-policy learning becomes challenging as the discrepancy between the underlying distributions of the agent's policy and the collected data grows. Although well-studied importance sampling and off-policy policy gradient techniques have been proposed to compensate for this discrepancy, they usually require collecting long trajectories and introduce additional problems such as vanishing or exploding gradients, or they discard many useful experiences, which ultimately increases computational complexity. Moreover, their generalization to continuous action domains or to policies approximated by deterministic deep neural networks is strictly limited. To overcome these limitations, we introduce a novel policy similarity measure that mitigates the effects of such discrepancy in continuous control. Our method offers an adequate single-step off-policy correction that is applicable to deterministic policy networks. Theoretical and empirical studies demonstrate that it achieves "safe" off-policy learning and substantially improves on the state of the art, attaining higher returns in fewer steps than competing methods through an effective schedule of the learning rate in Q-learning and policy optimization.
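
    The abstract does not give the paper's policy similarity measure, so the sketch below only illustrates the general kind of single-step correction it refers to: down-weighting a one-step TD update by a truncated importance ratio. The clipping constant, probability inputs, and function names are illustrative assumptions, not the paper's method.

        import numpy as np

        def truncated_is_weight(pi_prob, mu_prob, c=1.0):
            # Truncated importance ratio rho = min(c, pi(a|s) / mu(a|s));
            # clipping avoids the exploding-gradient problem noted above.
            return np.minimum(c, pi_prob / np.maximum(mu_prob, 1e-8))

        def corrected_one_step_target(q_sa, r, q_next, done, rho, gamma=0.99):
            # Scale only the one-step TD error by rho, so a large policy/data
            # mismatch (small rho) shrinks the update instead of discarding
            # the experience outright.
            td_error = r + gamma * (1.0 - done) * q_next - q_sa
            return q_sa + rho * td_error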

    Deep Reinforcement Learning Based Joint Downlink Beamforming and RIS Configuration in RIS-aided MU-MISO Systems Under Hardware Impairments and Imperfect CSI

    No full text
    We investigate the joint transmit beamforming and reconfigurable intelligent surface (RIS) configuration problem to maximize the sum downlink rate of a RIS-aided cellular multiuser multiple-input single-output (MU-MISO) system under imperfect channel state information (CSI) and hardware impairments, considering a practical phase-dependent RIS amplitude model. To this end, we present a novel deep reinforcement learning (DRL) framework and compare its performance against a vanilla DRL agent under two scenarios: the golden standard, where the base station (BS) knows the channel and the phase-dependent RIS amplitude model perfectly, and the mismatch scenario, where the BS has imperfect CSI and assumes ideal RIS reflections. Our numerical results show that the introduced framework substantially outperforms the vanilla DRL agent under mismatch and approaches the golden standard. This manuscript has been submitted to a conference and is currently pending approval from arXiv for preprint upload. Identifiers will be provided as they become available.
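
    As a concrete reading of the signal model described above (not of the DRL framework itself), the sketch below evaluates the downlink sum rate of a RIS-aided MU-MISO link with a phase-dependent RIS amplitude of the form commonly used in the practical-model literature. All parameter values, array shapes, and function names are illustrative assumptions.

        import numpy as np

        def ris_amplitude(theta, beta_min=0.2, phi=0.43 * np.pi, alpha=1.6):
            # Phase-dependent reflection amplitude: the amplitude dips toward
            # beta_min near theta = phi. Parameter values are assumed here,
            # not taken from the paper above.
            return (1 - beta_min) * ((np.sin(theta - phi) + 1) / 2) ** alpha + beta_min

        def sum_rate(H_bs_ris, h_ris_ue, W, theta, sigma2=1e-3):
            # H_bs_ris: (N_ris, N_tx) BS-to-RIS channel
            # h_ris_ue: (K, N_ris)    RIS-to-user channels
            # W:        (N_tx, K)     BS precoder, one column per user
            # theta:    (N_ris,)      RIS phase shifts chosen by the agent
            Phi = np.diag(ris_amplitude(theta) * np.exp(1j * theta))  # non-ideal reflection
            H_eff = h_ris_ue @ Phi @ H_bs_ris                         # (K, N_tx) effective channel
            G = np.abs(H_eff @ W) ** 2                                # per-user received powers
            signal = np.diag(G)
            interference = G.sum(axis=1) - signal
            sinr = signal / (interference + sigma2)
            return np.sum(np.log2(1 + sinr))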
